XVTP3D: Cross-view Trajectory Prediction Using Shared 3D Queries for Autonomous Driving
Trajectory prediction with uncertainty is a critical and challenging task for
autonomous driving. Nowadays, we can easily access sensor data represented in
multiple views. However, cross-view consistency has not been evaluated by
existing models, which can lead to divergences between the multimodal
predictions from different views. Such inconsistency suggests the network does
not comprehend the 3D scene and can leave downstream modules in a dilemma.
Instead, we predict multimodal trajectories while maintaining cross-view
consistency. We present a cross-view trajectory prediction method using shared
3D Queries (XVTP3D). We employ a set of 3D queries shared across views to
generate multiple goals that are cross-view consistent. We also propose a
random mask method and coarse-to-fine cross-attention to capture robust
cross-view features. To the best of our knowledge, this is the first work to
introduce the top-down paradigm from the BEV detection field to a trajectory
prediction problem. Experiments on two publicly available datasets show that
XVTP3D achieves state-of-the-art performance with consistent cross-view
predictions.
Comment: 11 pages, 6 figures, accepted by IJCAI 2
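The abstract's key mechanism is cross-attention between views, with random masking of cross-view features for robustness. A minimal numpy sketch of that general pattern follows; the shapes, the masking rate, and the single-head formulation are illustrative assumptions, not XVTP3D's actual architecture.

```python
import numpy as np

def cross_attention(q, k, v, mask=None):
    """Scaled dot-product cross-attention: queries from one view attend
    to keys/values from another view. mask=True keeps a position."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)              # (n_q, n_k) similarities
    if mask is not None:
        scores = np.where(mask, scores, -1e9)  # hide masked key positions
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)      # softmax over keys
    return w @ v

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))   # 4 shared queries, feature dim 8 (made-up sizes)
k = rng.normal(size=(6, 8))   # 6 features from the other view
v = rng.normal(size=(6, 8))

# "Random mask": randomly hide some cross-view features during training.
mask = rng.random((4, 6)) > 0.3
out = cross_attention(q, k, v, mask)
print(out.shape)  # (4, 8)
```

Because the queries are shared across views, attending them against each view's features yields per-view outputs that are tied to the same underlying 3D slots, which is the intuition behind the consistency claim.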
Identifying TNF and IL6 as potential hub genes and targeted drugs associated with scleritis: A bio-informative report
Background: Scleritis is a serious inflammatory eye disease that can lead to blindness. The etiology and pathogenesis of scleritis remain unclear, and increasing evidence indicates that specific genes and proteins are involved. This study aimed to identify pivotal genes and drug targets for scleritis, thus providing new directions for the treatment of this disease.
Methods: We screened candidate genes and proteins associated with scleritis by text-mining the PubMed database using Python, and assessed their functions using the DAVID database. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses were used to identify the functional enrichment of these genes and proteins. The hub genes were then identified with CytoHubba and assessed by protein-protein interaction (PPI) network analysis. Serum from patients with active scleritis and from healthy subjects was used for validation of the hub genes. Finally, the DGIdb database was used to predict drugs targeting the hub genes for treating scleritis.
Results: A total of 56 genes and proteins were found to be linked to scleritis, and 65 significantly altered pathways were identified in the KEGG analysis (FDR < 0.05). The top five pathways included the categories "Rheumatoid arthritis," "Inflammatory bowel disease," "Type I diabetes mellitus," and "Graft-versus-host disease." TNF and IL6 were ranked as the top two hub genes by CytoHubba. In our serum samples, both hub genes were expressed at high levels in active scleritis. Five scleritis-targeting drugs were found among 88 identified drugs.
Conclusions: This study provides key genes and drug targets related to scleritis through bioinformatics analysis. TNF and IL6 are considered key mediators and possible drug targets of scleritis. Five drug candidates may play an important role in the diagnosis and treatment of scleritis in the future and are worthy of further experimental and clinical study.
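CytoHubba ranks hub genes by network-centrality measures; the simplest is degree (number of PPI partners). A stdlib sketch of degree-based hub ranking follows. The gene names are real, but the edge list is a hypothetical illustration, not the study's actual PPI network.

```python
from collections import Counter

def hub_genes(edges, top_n=2):
    """Rank genes by degree (number of PPI partners), the simplest of
    CytoHubba's centrality measures."""
    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return [gene for gene, _ in degree.most_common(top_n)]

# Illustrative edge list only -- not the study's actual PPI network.
edges = [
    ("TNF", "IL6"), ("TNF", "IL1B"), ("TNF", "CRP"), ("TNF", "IL10"),
    ("IL6", "IL1B"), ("IL6", "CRP"), ("IL6", "STAT3"),
    ("IL1B", "CRP"),
]
print(hub_genes(edges))  # ['TNF', 'IL6']
```

On real networks CytoHubba offers several alternative metrics (MCC, betweenness, closeness) that can rank hubs differently from plain degree.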
Random Forest in Clinical Metabolomics for Phenotypic Discrimination and Biomarker Selection
Metabolomic data analysis becomes increasingly challenging when dealing with clinical samples with diverse demographic and genetic backgrounds and various pathological conditions or treatments. Although many classification tools, such as projection to latent structures (PLS), support vector machine (SVM), linear discriminant analysis (LDA), and random forest (RF), have been successfully used in metabolomics, their performance, including strengths and limitations, in clinical data analysis has remained unclear to researchers due to the lack of systematic evaluation of these tools. In this paper we comparatively evaluated the four classifiers PLS, SVM, LDA, and RF on clinical metabolomic data, derived from a gas chromatography-mass spectrometry platform, of healthy subjects and patients diagnosed with colorectal cancer, using cross-validation, R2/Q2 plots, receiver operating characteristic curves, variable reduction, and Pearson correlation. RF outperformed the other three classifiers on the given clinical data sets, highlighting its comparative advantages as a classification and biomarker selection tool for clinical metabolomic data analysis.
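The classifier comparison above leans on receiver operating characteristic curves. The area under such a curve can be computed directly from predicted scores via the rank-sum (Mann-Whitney) formulation, sketched below with made-up scores for a small two-class sample.

```python
def roc_auc(labels, scores):
    """AUC = probability that a random positive outscores a random
    negative (Mann-Whitney U / (n_pos * n_neg)); ties count one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return wins / (len(pos) * len(neg))

# Hypothetical classifier scores: 4 cancer (1) vs 4 control (0) samples.
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.6, 0.4, 0.7, 0.3, 0.2, 0.1]
print(roc_auc(labels, scores))  # 0.875
```

Because AUC depends only on the ranking of scores, it gives a threshold-free way to compare classifiers such as RF, SVM, LDA, and PLS on the same samples.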
Scaling Autoregressive Multi-Modal Models: Pretraining and Instruction Tuning
We present CM3Leon (pronounced "Chameleon"), a retrieval-augmented,
token-based, decoder-only multi-modal language model capable of generating and
infilling both text and images. CM3Leon uses the CM3 multi-modal architecture
but additionally shows the extreme benefits of scaling up and tuning on more
diverse instruction-style data. It is the first multi-modal model trained with
a recipe adapted from text-only language models, including a large-scale
retrieval-augmented pre-training stage and a second multi-task supervised
fine-tuning (SFT) stage. It is also a general-purpose model that can do both
text-to-image and image-to-text generation, allowing us to introduce
self-contained contrastive decoding methods that produce high-quality outputs.
Extensive experiments demonstrate that this recipe is highly effective for
multi-modal models. CM3Leon achieves state-of-the-art performance in
text-to-image generation with 5x less training compute than comparable methods
(zero-shot MS-COCO FID of 4.88). After SFT, CM3Leon can also demonstrate
unprecedented levels of controllability in tasks ranging from language-guided
image editing to image-controlled generation and segmentation.
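The abstract mentions self-contained contrastive decoding without detailing it. The general contrastive-decoding idea is to score tokens by the log-probability gap between two predictive distributions, restricted to tokens the stronger distribution finds plausible. The sketch below illustrates that general idea on toy distributions; it is an assumption-laden illustration, not CM3Leon's exact formulation.

```python
import math

def contrastive_pick(p_strong, p_weak, alpha=0.1):
    """Pick the token maximizing log p_strong - log p_weak among tokens
    whose strong-distribution probability clears a plausibility cutoff."""
    cutoff = alpha * max(p_strong.values())
    candidates = {t: math.log(p_strong[t]) - math.log(p_weak[t])
                  for t in p_strong if p_strong[t] >= cutoff}
    return max(candidates, key=candidates.get)

# Toy next-token distributions (made up for illustration).
p_strong = {"a": 0.5, "b": 0.4, "c": 0.1}
p_weak   = {"a": 0.45, "b": 0.2, "c": 0.35}
print(contrastive_pick(p_strong, p_weak))  # 'b'
```

Here "b" wins even though "a" has the highest raw probability, because the gap over the weaker distribution is what gets rewarded; the plausibility cutoff keeps low-probability tokens from winning on the gap alone.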
Cardiovascular risk and events in 17 low-, middle-, and high-income countries
BACKGROUND:
More than 80% of deaths from cardiovascular disease are estimated to occur in
low-income and middle-income countries, but the reasons are unknown.
METHODS:
We enrolled 156,424 persons from 628 urban and rural communities in 17 countries
(3 high-income, 10 middle-income, and 4 low-income countries) and assessed
their cardiovascular risk using the INTERHEART Risk Score, a validated score for
quantifying risk-factor burden without the use of laboratory testing (with higher
scores indicating greater risk-factor burden). Participants were followed for incident
cardiovascular disease and death for a mean of 4.1 years.
RESULTS:
The mean INTERHEART Risk Score was highest in high-income countries, intermediate
in middle-income countries, and lowest in low-income countries (P<0.001).
However, the rates of major cardiovascular events (death from cardiovascular
causes, myocardial infarction, stroke, or heart failure) were lower in high-income
countries than in middle- and low-income countries (3.99 events per 1000 person-years
vs. 5.38 and 6.43 events per 1000 person-years, respectively; P<0.001). Case
fatality rates were also lowest in high-income countries (6.5%, 15.9%, and 17.3%
in high-, middle-, and low-income countries, respectively; P = 0.01). Urban communities
had a higher risk-factor burden than rural communities but lower rates
of cardiovascular events (4.83 vs. 6.25 events per 1000 person-years, P<0.001) and
case fatality rates (13.52% vs. 17.25%, P<0.001). The use of preventive medications
and revascularization procedures was significantly more common in high-income
countries than in middle- or low-income countries (P<0.001).
CONCLUSIONS:
Although the risk-factor burden was lowest in low-income countries, the rates of
major cardiovascular disease and death were substantially higher in low-income
countries than in high-income countries. The high burden of risk factors in highincome
countries may have been mitigated by better control of risk factors and
more frequent use of proven pharmacologic therapies and revascularization.
(Funded by the Population Health Research Institute and others.)
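The incidence and case-fatality figures above follow standard epidemiological definitions: events per 1000 person-years of follow-up, and deaths as a percentage of events. A minimal sketch with hypothetical counts (the study reports rates, not the underlying person-year denominators):

```python
def rate_per_1000_person_years(events, person_years):
    """Incidence rate: events per 1000 person-years of follow-up."""
    return 1000.0 * events / person_years

def case_fatality_pct(deaths, events):
    """Case fatality: deaths as a percentage of events."""
    return 100.0 * deaths / events

# Hypothetical cohort: 250 events over 50,000 person-years, 40 deaths.
print(rate_per_1000_person_years(250, 50_000))  # 5.0
print(case_fatality_pct(40, 250))               # 16.0
```

Person-years rather than raw counts is what makes rates comparable across countries with different follow-up durations, as in the 4.1-year mean follow-up here.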
The Long-Baseline Neutrino Experiment: Exploring Fundamental Symmetries of the Universe
The preponderance of matter over antimatter in the early Universe, the
dynamics of the supernova bursts that produced the heavy elements necessary for
life and whether protons eventually decay --- these mysteries at the forefront
of particle physics and astrophysics are key to understanding the early
evolution of our Universe, its current state and its eventual fate. The
Long-Baseline Neutrino Experiment (LBNE) represents an extensively developed
plan for a world-class experiment dedicated to addressing these questions. LBNE
is conceived around three central components: (1) a new, high-intensity
neutrino source generated from a megawatt-class proton accelerator at Fermi
National Accelerator Laboratory, (2) a near neutrino detector just downstream
of the source, and (3) a massive liquid argon time-projection chamber deployed
as a far detector deep underground at the Sanford Underground Research
Facility. This facility, located at the site of the former Homestake Mine in
Lead, South Dakota, is approximately 1,300 km from the neutrino source at
Fermilab -- a distance (baseline) that delivers optimal sensitivity to neutrino
charge-parity symmetry violation and mass ordering effects. This ambitious yet
cost-effective design incorporates scalability and flexibility and can
accommodate a variety of upgrades and contributions. With its exceptional
combination of experimental configuration, technical capabilities, and
potential for transformative discoveries, LBNE promises to be a vital facility
for the field of particle physics worldwide, providing physicists from around
the globe with opportunities to collaborate in a twenty to thirty year program
of exciting science. In this document we provide a comprehensive overview of
LBNE's scientific objectives, its place in the landscape of neutrino physics
worldwide, the technologies it will incorporate and the capabilities it will
possess.
Comment: Major update of previous version. This is the reference document for
the LBNE science program and current status. Chapters 1, 3, and 9 provide a
comprehensive overview of LBNE's scientific objectives, its place in the
landscape of neutrino physics worldwide, the technologies it will incorporate
and the capabilities it will possess. 288 pages, 116 figures
Posteriori analysis on IceCube double pulse tau neutrino candidates
The IceCube Neutrino Observatory at the South Pole detects Cherenkov light emitted by charged secondary particles created in primary neutrino interactions. Double pulse waveforms can arise from charged-current interactions of astrophysical tau neutrinos with nucleons in the ice and the subsequent decay of the tau leptons. The previous 8-year tau double pulse analysis found three tau neutrino candidate events. Among them, the most promising one, observed in 2014, is located very near the dust layer in the middle of the detector. An a posteriori analysis of this event is presented in this paper, using a new ice model treatment with continuously varying nuisance parameters to perform targeted Monte Carlo re-simulation for tau and other background neutrino ensembles. The impact of different ice models on the expected signal and background statistics is also discussed.